This project proposes a method for automatically generating action models that describe the dynamics of video games, together with its integration into a planning agent that executes and monitors plans. A planner uses these action models to obtain deliberative behavior for agents in many different video games and, combined with a reactive module, handles both deterministic and non-deterministic levels. Experimental results validate the methodology and show that the knowledge-engineering effort required to define such complex domains can be greatly reduced. In addition, benchmarks for these domains have been produced, which may be of interest to the planning community for evaluating planners in the International Planning Competition.
Recent efforts on Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representations to represent 3D scenes. Due to the process of volumetric rendering, the inference speed of NeRF is extremely slow, limiting the application scenarios of NeRF on resource-constrained hardware, such as mobile devices. Many works have been conducted to reduce the latency of running NeRF models. However, most of them still require a high-end GPU for acceleration or extra storage memory, both of which are unavailable on mobile devices. Another emerging direction utilizes the neural light field (NeLF) for speedup, as only one forward pass is performed on a ray to predict the pixel color. Nevertheless, to reach a rendering quality similar to NeRF's, the network in NeLF is designed with intensive computation, which is not mobile-friendly. In this work, we propose an efficient network that runs in real time on mobile devices for neural rendering. We follow the setting of NeLF to train our network. Unlike existing works, we introduce a novel network architecture that runs efficiently on mobile devices with low latency and small size, i.e., saving $15\times \sim 24\times$ storage compared with MobileNeRF. Our model achieves high-resolution generation while maintaining real-time inference for both synthetic and real-world scenes on mobile devices, e.g., $18.04$ms (iPhone 13) for rendering one $1008\times756$ image of real 3D scenes. Additionally, we achieve similar image quality to NeRF and better quality than MobileNeRF (PSNR $26.15$ vs. $25.91$ on the real-world forward-facing dataset).
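The core NeLF idea mentioned in this abstract, a single network forward pass per ray with no volumetric integration, can be sketched as follows. The layer sizes and the 6-D origin-plus-direction ray parameterization are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights for a small fully connected network."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def nelf_forward(params, rays):
    """One forward pass per ray: (N, 6) origin+direction -> (N, 3) RGB.
    No per-ray sampling or alpha compositing, unlike NeRF."""
    x = rays
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)        # ReLU on hidden layers
    return 1.0 / (1.0 + np.exp(-x))       # sigmoid keeps RGB in [0, 1]

params = init_mlp([6, 32, 32, 3])
rays = rng.standard_normal((4, 6))        # 4 rays: origin (3) + direction (3)
colors = nelf_forward(params, rays)
print(colors.shape)                       # (4, 3)
```

The contrast with NeRF is that no points are sampled along the ray, so per-pixel cost is a single network evaluation.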
Can we make virtual characters in a scene interact with their surrounding objects through simple instructions? Is it possible to synthesize such motion plausibly with a diverse set of objects and instructions? Inspired by these questions, we present the first framework to synthesize the full-body motion of virtual human characters performing specified actions with 3D objects placed within their reach. Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters and outputs diverse sequences of full-body motions. This is in contrast to existing work, where full-body action synthesis methods generally do not consider object interactions, and human-object interaction methods focus mainly on synthesizing hand or finger movements for grasping objects. We accomplish our objective by designing an intent-driven full-body motion generator, which uses a pair of decoupled conditional variational autoencoders (CVAE) to learn the motion of the body parts in an autoregressive manner. We also optimize for the positions of the objects with six degrees of freedom (6DoF) such that they plausibly fit within the hands of the synthesized characters. We compare our proposed method with the existing methods of motion synthesis and establish a new and stronger state-of-the-art for the task of intent-driven motion synthesis. Through a user study, we further show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios compared to the current state-of-the-art methods, and are perceived to be as good as the ground truth on several occasions.
Conventional methods for human motion synthesis are either deterministic or struggle with the trade-off between motion diversity and motion quality. In response to these limitations, we introduce MoFusion, i.e., a new denoising-diffusion-based framework for high-quality conditional human motion synthesis that can generate long, temporally plausible, and semantically accurate motions based on a range of conditioning contexts (such as music and text). We also present ways to introduce well-known kinematic losses for motion plausibility within the motion diffusion framework through our scheduled weighting strategy. The learned latent space can be used for several interactive motion editing applications -- like inbetweening, seed conditioning, and text-based editing -- thus, providing crucial abilities for virtual character animation and robotics. Through comprehensive quantitative evaluations and a perceptual user study, we demonstrate the effectiveness of MoFusion compared to the state of the art on established benchmarks in the literature. We urge the reader to watch our supplementary video and visit https://vcai.mpi-inf.mpg.de/projects/MoFusion.
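One way to read the "scheduled weighting strategy" for kinematic losses is that geometric terms are most meaningful at low noise levels, so their weight can be annealed with the diffusion timestep. The cosine schedule below is an assumption for illustration, not MoFusion's published schedule.

```python
import math

def kinematic_weight(t: int, num_steps: int) -> float:
    """Weight in [0, 1]: near 1 for lightly-noised samples (small t),
    decaying toward 0 for heavily-noised samples (large t)."""
    return 0.5 * (1.0 + math.cos(math.pi * t / num_steps))

def total_loss(diffusion_loss: float, kinematic_loss: float,
               t: int, num_steps: int) -> float:
    """Denoising loss plus a timestep-weighted kinematic penalty."""
    return diffusion_loss + kinematic_weight(t, num_steps) * kinematic_loss

print(kinematic_weight(0, 1000))      # 1.0  (full kinematic weight)
print(kinematic_weight(1000, 1000))   # 0.0  (no kinematic weight)
```

The hypothetical `kinematic_loss` would stand in for terms such as bone-length or foot-contact penalties computed on the denoised motion.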
3D reconstruction and novel view synthesis of dynamic scenes from collections of single views recently gained increased attention. Existing work shows impressive results for synthetic setups and forward-facing real-world data, but is severely limited in the training speed and angular range for generating novel views. This paper addresses these limitations and proposes a new method for full 360{\deg} novel view synthesis of non-rigidly deforming scenes. At the core of our method are: 1) An efficient deformation module that decouples the processing of spatial and temporal information for acceleration at training and inference time; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field. We evaluate the proposed approach on the established synthetic D-NeRF benchmark, which enables efficient reconstruction from a single monocular view per time-frame randomly sampled from a full hemisphere. We refer to this form of inputs as monocularized data. To prove its practicality for real-world scenarios, we recorded twelve challenging sequences with human actors by sampling single frames from a synchronized multi-view rig. In both cases, our method is trained significantly faster than previous methods (minutes instead of days) while achieving higher visual accuracy for generated novel views. Our source code and data are available at our project page https://graphics.tu-bs.de/publications/kappel2022fast.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
To analyze this characteristic of vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using a dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r,{\theta}) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
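The polar-domain augmentation described above exploits the fact that, in $(r, \theta)$ coordinates, rotating the vessel cross-section is simply a circular shift along the $\theta$ axis, so microvessels can be made to appear at all possible angles. A minimal sketch, assuming an array layout with rows = $r$ and columns = $\theta$ (an illustrative choice, not the paper's exact implementation):

```python
import numpy as np

def rotate_polar(frame: np.ndarray, shift_deg: float) -> np.ndarray:
    """Rotate a polar-domain IVOCT frame by circularly shifting theta.
    frame: (num_r, num_theta) image; shift_deg: rotation in degrees."""
    n_theta = frame.shape[1]
    shift = int(round(shift_deg / 360.0 * n_theta))
    return np.roll(frame, shift, axis=1)

frame = np.arange(12, dtype=float).reshape(3, 4)   # toy (r, theta) frame
rotated = rotate_polar(frame, 90.0)                # 90 deg = 1 of 4 columns
print(rotated[0])                                  # [3. 0. 1. 2.]
```

Because the shift is circular, no interpolation or boundary padding is needed, which is what makes this augmentation cheap on raw polar IVOCT frames.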
Reinforcement learning (RL) aims to train an agent from a reward function in a given environment, whereas inverse reinforcement learning (IRL) seeks to recover the reward function from observations of expert behavior. It is well known that, in general, multiple reward functions give rise to the same optimal policy, so IRL is ill-defined. However, (Cao et al., 2021) showed that if we observe two or more experts with different discount factors, or acting in different environments, the reward function can under certain conditions be identified up to a constant. This work first presents an equivalent identifiability statement for multiple experts in tabular MDPs based on a rank condition, which is easy to verify and is also shown to be necessary. We then extend our results to various different setups: we characterize reward identifiability when the reward function can be represented as a linear combination of given features, making it more interpretable, and when we only have access to an approximate transition matrix. Even when the reward is not identifiable, we provide conditions on the features under which data from multiple experts in a given environment allow generalization and training of an optimal agent in a new environment. Our theoretical results on reward identifiability and generalizability are validated in various numerical experiments.
Multi-label image classification predicts a set of labels for a given image. Unlike multi-class classification, where each image carries exactly one label, this setting fits a wider range of applications. In this work, we revisit two popular approaches to multi-label classification: transformer-based heads and branches that process label-relation information with graphs. Although transformer-based heads are considered to achieve better results than graph-based branches, we argue that with a proper training strategy, graph-based methods can demonstrate only a small accuracy drop while reducing computational resources at inference. In our training strategy, instead of the asymmetric loss (ASL), we introduce its modification operating in angular space. Compared with binary cross-entropy loss, it implicitly learns a proxy feature vector on the unit hypersphere for each class, providing better discrimination ability. Based on the proposed loss and training strategy, we obtain SOTA results among single-modality methods on widespread multi-label classification benchmarks such as MS-COCO, PASCAL-VOC, NUS-WIDE and Visual Genome 500. Our code is available as part of the OpenVINO Training Extensions: https://github.com/openvinotoolkit/deep-object-reid/tree/multilabel
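The angular-space idea above can be sketched as a cosine classifier: each class gets a proxy vector on the unit hypersphere, and a label's score is the scaled cosine between the L2-normalized image embedding and that proxy. The scale value and the plain binary cross-entropy on top are illustrative assumptions; this is not the paper's exact ASL modification.

```python
import numpy as np

def angular_logits(embeddings, proxies, scale=10.0):
    """Scaled cosine similarity between normalized embeddings and
    per-class proxy vectors: (batch, dim) x (classes, dim) -> (batch, classes)."""
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    return scale * (e @ p.T)

def multilabel_bce_loss(logits, targets):
    """Multi-label binary cross-entropy over the angular logits."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(targets * np.log(probs + eps)
                    + (1 - targets) * np.log(1 - probs + eps))

rng = np.random.default_rng(0)
emb = rng.standard_normal((2, 8))      # batch of 2 image embeddings
prox = rng.standard_normal((5, 8))     # proxy vectors for 5 classes
targets = np.array([[1, 0, 0, 1, 0], [0, 1, 0, 0, 0]], dtype=float)
loss = multilabel_bce_loss(angular_logits(emb, prox), targets)
```

Training would pull an embedding toward the proxies of its positive labels and push it from the rest; since both sides live on the unit sphere, only angles matter, which is the discrimination benefit the abstract refers to.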
This paper reports on the second GENEA Challenge to benchmark data-driven automatic co-speech gesture generation. Participating teams used the same speech and motion dataset to build gesture-generation systems. Motion generated by all these systems was rendered to video using a standardized visualization pipeline and evaluated in several large, crowdsourced user studies. Unlike when comparing different research papers, differences in results are here due only to differences between methods, enabling direct comparison between systems. This year's dataset was based on 18 hours of full-body motion capture, including fingers, of different persons engaging in dyadic conversation. Ten teams participated in the challenge across two tiers: full-body and upper-body gestures. For each tier, we evaluated both the human-likeness of the gesture motion and its appropriateness for the specific speech signal. Our evaluation decouples human-likeness from gesture appropriateness, which has been a key challenge in the field. The evaluation results are a revolution and a revelation: some synthetic conditions were rated as significantly more human-like than human motion capture. To the best of our knowledge, this has never been demonstrated before on a high-fidelity avatar. On the other hand, all synthetic motion was found to be vastly less appropriate for the speech than the original motion-capture recordings. Additional material is available via the project website https://youngwoo-yoon.github.io/geneachallenge2022/